2024-03-01 10:46:13 · AIbase · 6.2k
Around 100 Malicious Code-Execution Models Found on Hugging Face AI Platform
Researchers have discovered roughly 100 malicious machine learning models on the Hugging Face platform that could let attackers run harmful code on users' machines. The malicious models abuse model-loading mechanisms such as PyTorch's pickle-based serialization to execute code when they are loaded, heightening security risks. AI developers are advised to use newer tools such as Huntr to strengthen model security. The findings underline the threat malicious models pose to user environments and the need for continued vigilance and stronger safeguards.
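To make the loading risk concrete, here is a minimal sketch of why deserializing an untrusted model file can run code, assuming the models abuse the Python pickle format that PyTorch checkpoints build on; the Payload class and message below are illustrative, not taken from the discovered models.

```python
import pickle

# A minimal sketch: pickle lets an object define __reduce__, which tells the
# unpickler to call an arbitrary callable during loading. Malicious model
# files can abuse this because torch.load relies on pickle under the hood.
class Payload:
    def __reduce__(self):
        # Illustrative payload: just prints a message. A real attack could
        # invoke os.system or fetch and run further code instead.
        return (print, ("arbitrary code executed during model deserialization",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the print runs merely because the bytes were loaded
```

Loading checkpoints with torch.load(..., weights_only=True), or scanning artifacts before use, narrows this class of attack by restricting what the unpickler is allowed to reconstruct.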
2023-10-25 16:06:04 · AIbase · 2.5k
Study Finds ChatGPT Can Be Manipulated into Generating Malicious Code
Research from the UK shows that ChatGPT can be manipulated into producing malicious code, and that several commercial AI tools contain security vulnerabilities that could endanger database security. The researchers warn that these risks deserve attention; some companies have adopted their recommendations and fixed the flaws, but cybersecurity strategies still need strengthening.
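As a hedged illustration of the database risk (a sketch, not part of the UK study or any vendor's fix), an application that executes model-generated SQL can at least restrict itself to read-only SELECT statements; the helper below and its keyword list are hypothetical.

```python
import sqlite3

# Hypothetical guard: reject model-generated SQL unless it looks like a
# single read-only SELECT, and open the database read-only regardless.
BLOCKED_KEYWORDS = {"insert", "update", "delete", "drop", "alter", "attach", "pragma"}

def run_generated_sql(db_path: str, generated_sql: str):
    statement = generated_sql.strip().rstrip(";")
    words = statement.lower().split()
    if not words or words[0] != "select" or BLOCKED_KEYWORDS.intersection(words):
        raise ValueError("refusing to execute suspicious generated SQL")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)  # read-only connection
    try:
        return conn.execute(statement).fetchall()
    finally:
        conn.close()

# run_generated_sql("app.db", "SELECT name FROM users LIMIT 5")   # allowed
# run_generated_sql("app.db", "DROP TABLE users")                 # rejected
```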
2023-08-10 11:32:24 · AIbase · 304
IBM Research: AI Chatbots Easily Deceived into Generating Malicious Code
IBM Research found that large language models such as GPT-4 can be easily deceived into generating malicious code or providing false security advice. The researchers report that a basic command of English and some background knowledge about a model's training data are enough to trick AI chatbots. Models differ in their susceptibility to such deception, with GPT-3.5 and GPT-4 proving easier to trick.